# Extractive QA
All models below are Transformers-based question answering systems for extractive QA (downloads and likes as listed at collection time; "–" marks an unspecified license).

| Model | Author | License | Language | Downloads | Likes | Description |
|---|---|---|---|---|---|---|
| Turkish Medical Question Answering | kaixkhazaki | MIT | Other | 20 | 1 | BERT-based Turkish medical QA model, fine-tuned to extract answers from medical texts. |
| Modernbert QnA Base Squad | rankyx | Apache-2.0 | English | 1,106 | 6 | QA model fine-tuned from ModernBERT on SQuAD, well suited to extractive QA tasks. |
| Flan T5 Large Squad2 | sjrhuschlee | MIT | English | 57 | 5 | Extractive QA model based on flan-t5-large, fine-tuned on SQuAD 2.0; handles both answerable and unanswerable questions. |
| Flan T5 Base Squad2 | sjrhuschlee | MIT | English | 2,425 | 4 | Extractive QA model based on flan-t5-base, fine-tuned on SQuAD 2.0, including unanswerable questions. |
| Eurekaqa Model | Kaludi | – | English | 32 | 2 | Extractive QA model that answers questions by locating supporting information in the given text. |
| Faquad Bert Base Portuguese Cased | eraldoluis | Apache-2.0 | Other | 14 | 4 | Fine-tuned version of neuralmind/bert-base-portuguese-cased on the FaQuAD dataset for Portuguese extractive QA. |
| Japanese Roberta Question Answering | ybelkada | – | Japanese | 298 | 1 | Japanese QA model fine-tuned on the JaQuAD dataset for extractive QA tasks. |
| Distilroberta Base Squad V2 | squirro | Apache-2.0 | English | 24 | 1 | distilroberta-base fine-tuned on SQuAD 2.0; supports unanswerable questions. |
| Qa Roberta Base Chinese Extractive | liam168 | – | Chinese | 34 | 9 | RoBERTa-Base model fine-tuned on Chinese corpora for extractive QA tasks. |
| Autonlp More Fine Tune 24465520 26265897 | teacookies | – | Other | 16 | 1 | Extractive QA model trained with AutoNLP; extracts answers from a given text. |
| Autonlp More Fine Tune 24465520 26265898 | teacookies | – | Other | 16 | 0 | Extractive QA model trained with AutoNLP; extracts answers from a given text. |
| Splinter Base | tau | Apache-2.0 | English | 648 | 1 | Self-supervised model pre-trained for few-shot QA with the Recurring Span Selection (RSS) objective. |
| Bert Base Multilingual Xquad | alon-albalak | – | Other | 24 | 0 | Multilingual QA model based on bert-base-multilingual-uncased, fine-tuned on the XQuAD dataset. |
| Vi Mrc Large | nguyenvulebinh | – | Multilingual | 879 | 5 | XLM-RoBERTa-based extractive QA model for Vietnamese; ranked first in the VLSP 2021 MRC evaluation. |
| Splinter Base Qass | tau | Apache-2.0 | English | 3,048 | 1 | Few-shot QA model pre-trained via self-supervision with the Recurring Span Selection (RSS) objective, which mimics the span-selection step of extractive QA. |
| Vi Mrc Base | nguyenvulebinh | – | Multilingual | 22 | 16 | XLM-RoBERTa-based Vietnamese QA model that also supports English; fine-tuned on SQuAD and related datasets. |
| Roberta Large Bne Sqac | PlanTL-GOB-ES | Apache-2.0 | Spanish | 966 | 8 | RoBERTa-large model optimized for Spanish QA, trained on a large corpus from the National Library of Spain. |
| Roberta Base Bne Sqac | PlanTL-GOB-ES | Apache-2.0 | Spanish | 507 | 4 | Spanish QA model fine-tuned from roberta-base-bne, which was pre-trained on 570 GB of National Library of Spain text. |
| Bart Squad2 | primer-ai | – | English | 18 | 2 | BART-based extractive QA model trained on SQuAD 2.0, reaching an F1 score of 87.4. |
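The encoder-style models listed above all extract answers the same way: the model scores every context token as a potential answer start and as a potential answer end, and the predicted answer is the span maximizing the combined score, subject to start ≤ end and a length cap. A minimal sketch of that span-selection step (pure Python; the function name and the toy logits are illustrative, not taken from any listed model):

```python
def best_span(start_logits, end_logits, max_len=30):
    """Return the (start, end) token indices maximizing
    start_logit[s] + end_logit[e], with s <= e < s + max_len."""
    best, best_score = (0, 0), float("-inf")
    for s, s_logit in enumerate(start_logits):
        for e in range(s, min(s + max_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best, best_score

# Toy context tokens with illustrative logits peaking on "Paris".
tokens = ["The", "capital", "of", "France", "is", "Paris", "."]
start = [0.1, 0.2, 0.0, 0.3, 0.1, 2.5, 0.0]
end   = [0.0, 0.1, 0.1, 0.2, 0.0, 2.8, 0.3]

(s, e), score = best_span(start, end)
print(" ".join(tokens[s:e + 1]))  # prints "Paris"
```

The SQuAD 2.0 models in the table (Flan T5 Base/Large Squad2, Distilroberta Base Squad V2, Bart Squad2) additionally compare the best span score against a no-answer score so they can abstain on unanswerable questions.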